
    A nonmanipulable test

    A test is said to control for type I error if it is unlikely to reject the data-generating process. However, if stochastic processes can be produced at random such that, for every possible future realization of the data, the selected process is unlikely to be rejected, then the test is said to be manipulable. A manipulable test therefore has essentially no capacity to reject a strategic expert. Many tests proposed in the existing literature, including calibration tests, control for type I error but are manipulable. We construct a test that controls for type I error and is nonmanipulable.

    Comment: Published at http://dx.doi.org/10.1214/08-AOS597 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
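    To make "calibration test" concrete, the sketch below is a minimal, hypothetical implementation of our own (the function name, binning scheme, and thresholds are assumptions, not the paper's construction): it passes a forecaster whenever, within each sufficiently used forecast bin, the empirical frequency of the event is close to the announced probability. A forecaster who reports the true probabilities passes with high probability, which is the type I error control described above; the abstract's point is that such a test can nonetheless be manipulated by a strategic, uninformed expert.

        import numpy as np

        def calibration_test(forecasts, outcomes, n_bins=10, tol=0.1, min_count=30):
            """Toy calibration test: reject if, in any forecast bin used at least
            `min_count` times, the empirical event frequency differs from the
            average announced probability in that bin by more than `tol`."""
            forecasts = np.asarray(forecasts, dtype=float)
            outcomes = np.asarray(outcomes, dtype=float)
            for b in range(n_bins):
                lo, hi = b / n_bins, (b + 1) / n_bins
                upper = (forecasts <= hi) if b == n_bins - 1 else (forecasts < hi)
                in_bin = (forecasts >= lo) & upper
                if in_bin.sum() < min_count:
                    continue  # too few observations to judge this bin
                if abs(outcomes[in_bin].mean() - forecasts[in_bin].mean()) > tol:
                    return "reject"
            return "pass"

        # An informed forecaster who announces the true probability is rarely rejected.
        rng = np.random.default_rng(0)
        p_true = 0.7
        outcomes = (rng.random(5000) < p_true).astype(float)
        print(calibration_test(np.full(5000, p_true), outcomes))  # "pass" with high probability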

    Falsifiability

    We examine the fundamental concept of Popper’s falsifiability within an economic model in which a tester hires a potential expert to produce a theory. Payments are made contingent on the performance of the theory vis-à-vis future realizations of the data. We show that if experts are strategic, then falsifiability has no power to distinguish legitimate scientific theories from worthless theories. We also show that even if experts are strategic there are alternative criteria that can distinguish legitimate from worthless theories.

    Testing Strategic Experts

    Contracts and uncertainty

    A decision maker, named Alice, wants to know if an expert has significant information about the payoff-relevant probability of future events. The expert, named Bob, either knows this probability almost perfectly or knows nothing about it. Hence, both Alice and the uninformed expert face uncertainty: they do not know the payoff-relevant probability. Alice offers a contract to Bob. If he accepts this contract, then he must announce the probability distribution before any data are observed. Once the data unfold, transfers between Alice and Bob occur. It is demonstrated that if the informed expert accepts some contract, then the uninformed expert also accepts this contract. Hence, Alice's adverse selection problem cannot be mitigated by screening contracts that separate informed from uninformed experts. This result stands in contrast with the analysis of contracts under risk, where separation is often feasible.

    Contracts, uncertainty, experts, minmax theorems

    Manipulability of Future-Independent Tests

    The difficulties in properly anticipating key economic variables may encourage decision makers to rely on experts’ forecasts. Professional forecasters, however, may not be reliable and so their forecasts must be empirically tested. This may induce experts to forecast strategically in order to pass the test. A test can be ignorantly passed if a false expert, with no knowledge of the data generating process, can pass the test. Many tests that are unlikely to reject correct forecasts can be ignorantly passed. Tests that cannot be ignorantly passed do exist, but these tests must make use of predictions contingent on data not yet observed at the time the forecasts are rejected. Such tests cannot be run if forecasters report only the probability of the next period’s events on the basis of the actually observed data. This result shows that it is difficult to dismiss false, but strategic, experts who know how theories are tested. This result also shows an important role that can be played by predictions contingent on data not yet observed.

    Testing Strategic Experts
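    As a compact restatement of the central notion, the display below is our own hedged formalization of "ignorantly passed"; the notation (a test T mapping a theory f and a data realization s to a pass/reject verdict, a set F of permissible theories, a set S of realizations, a randomization ζ over theories, and a tolerance ε) is introduced here for illustration and is not quoted from the paper.

        % A false expert, with no knowledge of the data generating process,
        % can ignorantly pass the test T if some random generator of theories
        % is unlikely to be rejected on every possible realization of the data:
        \[
          \exists\, \zeta \in \Delta(F)\ \text{such that}\quad
          \forall\, s \in S:\qquad
          \Pr_{f \sim \zeta}\bigl[\, T(f, s) = \text{pass} \,\bigr] \;\ge\; 1 - \varepsilon .
        \]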

    Strategic Manipulation of Empirical Tests

    Theories can be produced by experts seeking a reputation for having knowledge. Hence, a tester could anticipate that theories may have been strategically produced by uninformed experts who want to pass an empirical test. We show that, with no restriction on the domain of permissible theories, strategic experts cannot be discredited for an arbitrary but given number of periods, no matter which test is used (provided that the test does not reject the actual data-generating process). Natural ways around this impossibility result include 1) assuming that unbounded data sets are available and 2) restricting the domain of permissible theories (opening the possibility that the actual data-generating process is rejected out of hand). In both cases, it is possible to dismiss strategic experts, but only to a limited extent. These results show significant limits on what data can accomplish when experts produce theories strategically.

    Testing Strategic Experts

    Strategic Manipulation of Empirical Tests

    Theories can be produced by individuals seeking a reputation for having knowledge. Hence, a significant question is how to test theories while anticipating that they might have been produced by (potentially uninformed) experts who prefer their theories not to be rejected. If a theory that predicts exactly like the data-generating process is not rejected with high probability, then the test is said to not reject the truth. On the other hand, if a false expert, with no knowledge of the data-generating process, can strategically select theories that will not be rejected, then the test can be ignorantly passed. Such tests have limited use because they cannot dismiss even completely uninformed experts. Many tests proposed in the literature (e.g., calibration tests) can be ignorantly passed. Dekel and Feinberg (2006) introduced a class of tests that seemingly have some power to dismiss uninformed experts. We show that some tests from their class can also be ignorantly passed. One of those tests, however, does not reject the truth and cannot be ignorantly passed. Thus, this empirical test can dismiss false experts.

    We also show that a false reputation of knowledge can be strategically sustained for an arbitrary, but given, number of periods, no matter which test is used (provided that it does not reject the truth). However, false experts can be discredited, even with bounded data sets, if the domain of permissible theories is mildly restricted.

    Non-Bayesian Updating: A Theoretical Framework

    This paper models an agent in an infinite horizon setting who does not update according to Bayes' Rule, and who is self-aware and anticipates her updating behavior when formulating plans. Choice-theoretic axiomatic foundations are provided. Then the model is specialized axiomatically to capture updating biases that reflect excessive weight given to (i) prior beliefs, or alternatively, (ii) the realized sample. Finally, the paper describes a counterpart of the exchangeable Bayesian model, where the agent tries to learn about parameters, and some answers are provided to the question "what does a non-Bayesian updater learn?"

    non-Bayesian updating, overreaction, underreaction, confirmatory bias, law of small numbers, gambler's fallacy, hot hand fallacy, temptation, self-control, learning, menus
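    To illustrate the two biases just described, the following sketch is a toy numerical example of our own (the weighted-updating rule, parameter names, and numbers are assumptions for illustration, not the paper's axiomatized model): exponent a > 1 gives excessive weight to the prior (underreaction to data), exponent b > 1 gives excessive weight to the realized sample (overreaction), and a = b = 1 recovers Bayes' Rule.

        import numpy as np

        def weighted_update(prior, likelihood, a=1.0, b=1.0):
            # Illustrative non-Bayesian update: posterior proportional to
            # prior**a * likelihood**b.  Setting a = b = 1 is Bayes' Rule.
            prior = np.asarray(prior, dtype=float)
            likelihood = np.asarray(likelihood, dtype=float)
            unnorm = (prior ** a) * (likelihood ** b)
            return unnorm / unnorm.sum()

        # Two hypotheses about a coin: fair (p = 0.5) or biased towards heads (p = 0.8).
        prior = np.array([0.7, 0.3])
        heads, n = 7, 10  # observed sample: 7 heads in 10 tosses
        likelihood = np.array([0.5**heads * 0.5**(n - heads),
                               0.8**heads * 0.2**(n - heads)])

        print(weighted_update(prior, likelihood))          # Bayes' Rule
        print(weighted_update(prior, likelihood, a=3.0))   # over-weights the prior: stays near it
        print(weighted_update(prior, likelihood, b=3.0))   # over-weights the sample: overreacts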

    Non-Bayesian Updating: A Theoretical Framework

    This paper models an agent in a multi-period setting who does not update according to Bayes' Rule, and who is self-aware and anticipates her updating behavior when formulating plans. Choice-theoretic axiomatic foundations are provided. Then the model is specialized axiomatically to capture updating biases that reflect excessive weight given to (i) prior beliefs, or alternatively, (ii) the realized sample. Finally, the paper describes a counterpart of the exchangeable Bayesian model, where the agent tries to learn about parameters, and some answers are provided to the question, "what does a non-Bayesian updater learn?"

    skewed returns

    Overconfidence, Insurance and Paternalism

    It is well known that when agents are fully rational, compulsory public insurance may make all agents better off in the Rothschild and Stiglitz (1976) model of insurance markets. We find that when sufficiently many agents underestimate their personal risks, compulsory insurance makes low-risk agents worse off. Hence, behavioral biases may weaken some of the well-established rationales for government intervention based on asymmetric information.